    The Catholic Physician, Health and Holiness

    Compensation of the skin effect in low-frequency potential drop measurements

    Potential drop measurements are routinely used in the non-destructive evaluation of component integrity. They employ either direct current (DC) or alternating current (AC); the latter offers superior noise performance owing to phase-sensitive detection and the reduction of flicker noise. AC measurements are, however, subject to the skin effect, whereby the current is electromagnetically constricted to the surface of the component. Unfortunately, the skin effect is a function of magnetic permeability, which in ferromagnetic materials is sensitive to a number of parameters, including stress and temperature; consequently, in-situ impedance measurements are likely to be unstable. It has been proposed that quasi-DC measurements, which benefit from superior noise performance but also tend towards the skin-effect-independent DC measurement, be adopted for in-situ creep measurements on power station components. Unfortunately, a quasi-DC measurement only tends towards the DC distribution, so some remnant sensitivity to the skin effect remains. This paper presents a correction for situations where that remnant sensitivity is not adequately suppressed by using a sufficiently low frequency; the application of particular interest is the in-situ monitoring of the creep strain of power station components. The correction uses the measured phase angle to approximate the influence of the skin effect and thereby recover the DC-asymptotic value of the resistance. The basis of the correction, namely that potential drop measurements are minimum phase, is presented and illustrated for two cases: the creep strain sensor of practical interest, and a conducting rod as another common case to demonstrate generality. The correction is demonstrated experimentally on a component in which the skin effect is manipulated by the application of a range of elastic stresses.
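
    To make the phase-based idea concrete, here is a minimal numerical sketch in Python. It assumes a simple minimum-phase model Z(w) = R_dc * sqrt(1 + jw/w_c); this model, the corner frequency w_c, and the resulting closed-form correction R_dc = |Z| * sqrt(cos 2*phase) are illustrative assumptions for the sketch, not the correction derived in the paper.

```python
import numpy as np

def r_dc_from_measurement(z_mag, phase_rad):
    """Recover the DC-asymptotic resistance from |Z| and the phase angle,
    assuming the illustrative model Z(w) = R_dc * sqrt(1 + 1j*w/w_c).
    For that model, phase = 0.5*atan(w/w_c) and
    |Z| = R_dc * (1 + (w/w_c)**2)**0.25, so R_dc = |Z| * sqrt(cos(2*phase))."""
    return z_mag * np.sqrt(np.cos(2.0 * phase_rad))

# Synthetic check: simulate a quasi-DC measurement and correct it.
r_dc = 100e-6                  # true DC resistance, 100 micro-ohm
w_c = 2.0 * np.pi * 50.0       # assumed corner frequency (hypothetical)
w = 2.0 * np.pi * 5.0          # 5 Hz quasi-DC excitation
z = r_dc * np.sqrt(1.0 + 1j * w / w_c)
est = r_dc_from_measurement(abs(z), np.angle(z))
print(f"raw |Z| = {abs(z):.4e} ohm, corrected = {est:.4e} ohm")  # recovers 1.0000e-04
```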

    Felony Murder in New York

    Particle Filtering and Smoothing Using Windowed Rejection Sampling

    "Particle methods" are sequential Monte Carlo algorithms, typically involving importance sampling, that are used to estimate and sample from joint and marginal densities from a collection of a, presumably increasing, number of random variables. In particular, a particle filter aims to estimate the current state XnX_{n} of a stochastic system that is not directly observable by estimating a posterior distribution π(xny1,y2,,yn)\pi(x_{n}|y_{1},y_{2}, \ldots, y_{n}) where the {Yn}\{Y_{n}\} are observations related to the {Xn}\{X_{n}\} through some measurement model π(ynxn)\pi(y_{n}|x_{n}). A particle smoother aims to estimate a marginal distribution π(xiy1,y2,,yn)\pi(x_{i}|y_{1},y_{2}, \ldots, y_{n}) for 1i<n1 \leq i < n. Particle methods are used extensively for hidden Markov models where {Xn}\{X_{n}\} is a Markov chain as well as for more general state space models. Existing particle filtering algorithms are extremely fast and easy to implement. Although they suffer from issues of degeneracy and "sample impoverishment", steps can be taken to minimize these problems and overall they are excellent tools for inference. However, if one wishes to sample from a posterior distribution of interest, a particle filter is only able to produce dependent draws. Particle smoothing algorithms are complicated and far less robust, often requiring cumbersome post-processing, "forward-backward" recursions, and multiple passes through subroutines. In this paper we introduce an alternative algorithm for both filtering and smoothing that is based on rejection sampling "in windows" . We compare both speed and accuracy of the traditional particle filter and this "windowed rejection sampler" (WRS) for several examples and show that good estimates for smoothing distributions are obtained at no extra cost

    Stable coronary syndromes: pathophysiology, diagnostic advances and therapeutic need

    The diagnostic management of patients with angina pectoris typically centres on the detection of obstructive epicardial coronary artery disease (CAD), which aligns with evidence-based treatment options that include medical therapy and myocardial revascularisation. This clinical paradigm fails to account for the considerable proportion (approximately one-third) of patients with angina in whom obstructive CAD is excluded. This common scenario presents a diagnostic conundrum whereby angina occurs but there is no obstructive CAD (ischaemia and no obstructive coronary artery disease, INOCA). We review new insights into the pathophysiology of angina whereby myocardial ischaemia results from a deficient supply of oxygenated blood to the myocardium, due to various combinations of focal or diffuse epicardial disease (macrovascular), microvascular dysfunction or both. Macrovascular disease may be due to the presence of obstructive CAD secondary to atherosclerosis, or may be dynamic in nature due to a functional disorder (eg, coronary artery spasm, myocardial bridging). The pathophysiology of coronary microvascular disease may involve anatomical abnormalities resulting in increased coronary resistance, or functional abnormalities resulting in abnormal vasomotor tone. We consider novel clinical diagnostic techniques enabling new insights into the causes of angina and appraise the need for improved therapeutic options for patients with INOCA. We conclude that the taxonomy of stable CAD could be improved to better reflect the heterogeneous pathophysiology of the coronary circulation. We propose the term ‘stable coronary syndromes’ (SCS), which aligns with the well-established terminology for ‘acute coronary syndromes’. SCS subtends a clinically relevant classification that more fully encompasses the different diseases of the epicardial and microvascular coronary circulation.

    Depth from Monocular Images using a Semi-Parallel Deep Neural Network (SPDNN) Hybrid Architecture

    Deep neural networks have been applied to a wide range of problems in recent years. In this work, a Convolutional Neural Network (CNN) is applied to the problem of determining depth from a single camera image (monocular depth). Eight different networks are designed to perform depth estimation, each suited to a particular feature level; networks with different pooling sizes capture different feature levels. After this set of networks is designed, the models are combined into a single network topology using graph optimization techniques. This "Semi Parallel Deep Neural Network (SPDNN)" eliminates duplicated common network layers and can be further optimized by retraining, yielding an improved model compared to the individual topologies. In this study, four SPDNN models are trained and evaluated in two stages on the KITTI dataset. The ground truth images in the first part of the experiment are provided by the benchmark, while for the second part the ground truth images are the depth maps produced by a state-of-the-art stereo matching method. The results of the first evaluation demonstrate that using post-processing techniques to refine the target of the network increases the accuracy of depth estimation on individual mono images. The second evaluation shows that using segmentation data alongside the original data as input can improve the depth estimation results to the point where performance is comparable with stereo depth estimation. The computational time is also discussed in this study. Comment: 44 pages, 25 figures.
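
    As a rough illustration of the semi-parallel idea, the PyTorch sketch below merges several candidate topologies that share their early layers into one graph: a single shared trunk feeds parallel branches that differ only in pooling size, and the branch outputs are fused into one depth map. The layer sizes, branch count, and fusion by concatenation are assumptions for the sketch, not the authors' exact SPDNN architecture.

```python
import torch
import torch.nn as nn

class SemiParallelSketch(nn.Module):
    """Toy illustration of the semi-parallel idea: one shared trunk
    (the layers the candidate networks have in common, kept only once)
    feeds parallel branches that differ only in pooling size, and the
    branch outputs are fused into a single depth map."""

    def __init__(self, pool_sizes=(2, 4, 8)):
        super().__init__()
        self.trunk = nn.Sequential(                   # shared early layers
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.branches = nn.ModuleList([               # one branch per feature level
            nn.Sequential(
                nn.MaxPool2d(k),
                nn.Conv2d(16, 8, 3, padding=1), nn.ReLU(),
                nn.Upsample(scale_factor=k),          # restore spatial size
            )
            for k in pool_sizes
        ])
        self.head = nn.Conv2d(8 * len(pool_sizes), 1, 1)  # fuse to a depth map

    def forward(self, x):
        shared = self.trunk(x)
        feats = [branch(shared) for branch in self.branches]
        return self.head(torch.cat(feats, dim=1))

# Example: a single-channel 64x64 image in, a 64x64 depth map out.
depth = SemiParallelSketch()(torch.randn(1, 1, 64, 64))
print(depth.shape)  # torch.Size([1, 1, 64, 64])
```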

    Summary of electrical component development for a 400-hertz Brayton energy conversion system

    Design, fabrication, and testing of a 12-kilowatt inductor alternator, voltage regulator-exciter, and parasitic loading speed controller - summary.

    Aristotle's Demonstrative Logic

    Demonstrative logic, the study of demonstration as opposed to persuasion, is the subject of Aristotle's two-volume Analytics. Many of his examples are geometrical. Demonstration produces knowledge (of the truth of propositions); persuasion merely produces opinion. Aristotle presented a general truth-and-consequence conception of demonstration meant to apply to all demonstrations. According to him, a demonstration, which normally proves a conclusion not previously known to be true, is an extended argumentation beginning with premises known to be truths and containing a chain of reasoning showing by deductively evident steps that its conclusion is a consequence of its premises. In particular, a demonstration is a deduction whose premises are known to be true. Aristotle's general theory of demonstration required a prior general theory of deduction, presented in the Prior Analytics. His general immediate-deduction-chaining conception of deduction was meant to apply to all deductions. According to him, any deduction that is not immediately evident is an extended argumentation involving a chain of intermediate, immediately evident steps that shows its final conclusion to follow logically from its premises. To illustrate his general theory of deduction, he presented an ingeniously simple and mathematically precise special case traditionally known as the categorical syllogistic.
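
    As a modern illustration of a deductively evident categorical step, the syllogism Barbara (all M are P; all S are M; therefore all S are P) can be checked mechanically. The Lean 4 encoding below is our own illustrative formalization, using predicates over an arbitrary domain to stand in for the categorical terms; it is not drawn from the source text.

```lean
-- Barbara: every M is P; every S is M; therefore every S is P.
example {α : Type} (S M P : α → Prop)
    (h₁ : ∀ x, M x → P x)   -- major premise: all M are P
    (h₂ : ∀ x, S x → M x) : -- minor premise: all S are M
    ∀ x, S x → P x :=       -- conclusion: all S are P
  fun x hS => h₁ x (h₂ x hS)
```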

    In Hot Pursuit of the Hidden Companion of Eta Carinae: An X-ray Determination of the Wind Parameters

    We present X-ray spectral fits to a recently obtained Chandra grating spectrum of Eta Carinae, one of the most massive and powerful stars in the Galaxy, which is strongly suspected to be a colliding wind binary system. Hydrodynamic models of colliding winds are used to generate synthetic X-ray spectra for a range of mass-loss rates and wind velocities, which are then fitted against the newly acquired Chandra grating data. We find that, owing to the low velocity of the primary wind (~500 km/s), most of the observed X-ray emission appears to arise from the shocked wind of the companion star. We use the duration of the lightcurve minimum to fix the wind momentum ratio at 0.2. We are then able to obtain a good fit to the data by varying the mass-loss rate of the companion and the terminal velocity of its wind, finding Mdot ~ 1e-5 Msol/yr and v ~ 3000 km/s. With observationally determined values of ~500-700 km/s for the velocity of the primary wind, our fit implies a primary mass-loss rate of Mdot ~ 2.5e-4 Msol/yr. This value is smaller than commonly inferred, although we note that a lower mass-loss rate can reduce some of the problems noted by Hillier et al. (2001) when a value as high as 1e-3 Msol/yr is used. The wind parameters of the companion are indicative of a massive star, which may or may not be evolved. The line strengths appear to show slightly sub-solar abundances, although this needs further confirmation. Based on the over-estimation of the X-ray line strengths in our model, and a re-interpretation of the HST/FOS results, it appears that the homunculus nebula was produced by the primary star. Comment: 12 pages, 7 figures, accepted by A&A.
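
    As a quick consistency check on the quoted numbers, assume the standard definition of the wind momentum ratio, eta = (Mdot_2 * v_2) / (Mdot_1 * v_1), with subscript 2 for the companion and 1 for the primary (our assumed convention; the abstract does not spell it out):

```python
# Back-of-envelope check of the quoted wind parameters, assuming
# eta = (Mdot_2 * v_2) / (Mdot_1 * v_1), companion over primary.
eta = 0.2                   # wind momentum ratio fixed by the X-ray minimum
mdot_2, v_2 = 1e-5, 3000.0  # companion: Msol/yr and km/s (fitted values)
v_1 = 600.0                 # primary wind speed, mid-range of ~500-700 km/s
mdot_1 = mdot_2 * v_2 / (eta * v_1)
print(f"implied primary mass-loss rate: {mdot_1:.1e} Msol/yr")
# -> 2.5e-04, matching the abstract's Mdot ~ 2.5e-4 Msol/yr
```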